A new ant colony optimization model for complex graph-based problems
Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: July 2014. Nowadays, a huge number of problems are so complex that heuristic-based algorithms must be employed to search for near-optimal (or even optimal) solutions. These problems are usually NP-complete, so classical algorithms are not good candidates to address them: they need a large amount of computational resources, or they simply cannot find any solution as the problem grows. Classical examples of this kind of problem are the Travelling Salesman Problem (TSP) and the N-Queens problem. Examples can also be found in real-world and industrial domains related to the optimization of complex systems, such as planning, scheduling, Vehicle Routing Problems (VRP), the WiFi network Design Problem (WiFiDP) and behavioural pattern identification, among others.
Regarding heuristic-based algorithms, two well-known paradigms are Swarm Intelligence and Evolutionary Computation. Both paradigms belong to a subfield of Artificial Intelligence named Computational Intelligence, which also contains the areas of Fuzzy Systems, Artificial Neural Networks and Artificial Immune Systems. Swarm Intelligence (SI) algorithms focus on the collective behaviour of self-organizing systems. These algorithms are characterized by the emergence of collective intelligence from non-complex individual behaviours and the communication schemes among individuals. Some examples of SI algorithms are particle swarm optimization, ant colony optimization (ACO), bee colony optimization and bird flocking.
Ant Colony Optimization (ACO) algorithms are based on the foraging behaviour of ants. In this kind of algorithm, each ant takes a series of decisions during its execution that allow it to build its own solution to the problem. Once an ant has finished its execution, it goes back along the path it followed and deposits, in the environment, pheromones that contain information about the solution it built. These pheromones will influence the decisions of future ants, producing an indirect communication through the environment called stigmergy.
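The decision/deposit cycle just described can be sketched in a few lines. This is a minimal illustration of the standard Ant System scheme (move probabilities weighted by pheromone^alpha times the heuristic (1/distance)^beta, deposit proportional to solution quality), not the specific model developed in the thesis; the graph encoding and parameter names are assumptions.

```python
import random

def aco_step(graph, pheromone, alpha=1.0, beta=2.0, deposit=1.0):
    """One ant builds a tour over `graph` (dict: node -> {neighbour: distance}),
    then retraces it and reinforces its edges with pheromone (stigmergy)."""
    start = next(iter(graph))
    tour, current = [start], start
    while len(tour) < len(graph):
        # Move probability combines pheromone level and the heuristic 1/distance.
        options = [n for n in graph[current] if n not in tour]
        weights = [pheromone[(current, n)] ** alpha * (1.0 / graph[current][n]) ** beta
                   for n in options]
        current = random.choices(options, weights=weights)[0]
        tour.append(current)
    # Going back along the followed path, deposit pheromone on each edge,
    # proportional to the quality (inverse length) of the built solution.
    length = sum(graph[a][b] for a, b in zip(tour, tour[1:]))
    for a, b in zip(tour, tour[1:]):
        pheromone[(a, b)] += deposit / length
    return tour, length
```

Future ants read the updated `pheromone` table, which is the indirect (stigmergic) communication channel; a full ACO run would also evaporate pheromone between iterations.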
When an ACO algorithm is applied to any of the optimization problems just described, the problem is usually modelled as a graph. Nevertheless, the classical graph-based representation is not the best one for the execution of ACO algorithms because it presents some important pitfalls. The first is the polynomial, or even exponential, growth of the resulting graph. The second concerns problems that require real-valued variables, which cannot be modelled using the classical graph-based representation.
On the other hand, Evolutionary Computation (EC) comprises a set of population-based algorithms inspired by the Darwinian evolutionary process. In this kind of algorithm there are one or more populations composed of different individuals, each representing a possible solution to the problem. At each iteration, the population evolves through evolutionary operators, which means that better individuals (i.e. better solutions) are generated along the execution of the algorithm. Both kinds of algorithms, EC and SI, have traditionally been applied to the NP-hard problems mentioned above. Different population-based strategies have been developed, compared and even combined to design hybrid algorithms.
This thesis focuses on the analysis of classical graph-based representations and their application in ACO algorithms for complex problems, and on the development of a new ACO model that tries to take a step forward in this kind of algorithm. In this new model, the problem is represented using a reduced graph, which affects the behaviour of the ants, making it more complex. This size reduction also causes a fast growth in the number of pheromones created. For this reason, a new metaheuristic (called the Oblivion Rate) has been designed to control the number of pheromones stored in the graph.
In this thesis, different metaheuristics have been designed for the proposed system and their performance has been compared. One of these metaheuristics is the Oblivion Rate, based on an exponential function that takes into account the number of pheromones created in the system. Another Oblivion Rate function is based on a bio-inspired swarm algorithm that uses some concepts extracted from evolutionary algorithms. This bio-inspired swarm algorithm is called the Coral Reef Optimization (CRO) algorithm and is based on the behaviour of corals in a reef.
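The abstract does not give the exact exponential form of the Oblivion Rate, so the following is a hypothetical sketch of the idea only: the fraction of stored pheromones to forget grows towards 1 as the pheromone count grows. The names `capacity` and `k` are illustrative assumptions, not parameters from the thesis.

```python
import math

def oblivion_rate(n_pheromones, capacity=1000.0, k=5.0):
    """Hypothetical exponential Oblivion Rate: fraction of pheromones to
    forget, rising from ~0 (few pheromones) towards 1 as the count
    approaches `capacity`."""
    return 1.0 - math.exp(-k * n_pheromones / capacity)

def forget(pheromones, capacity=1000.0, k=5.0):
    """Apply the rate to a pheromone store (dict: key -> strength):
    drop the weakest entries, keep the strongest ones."""
    rate = oblivion_rate(len(pheromones), capacity, k)
    keep = max(1, int(len(pheromones) * (1.0 - rate)))
    return dict(sorted(pheromones.items(), key=lambda kv: kv[1], reverse=True)[:keep])
```

The point of such a mechanism is that the reduced-graph model concentrates many pheromones on few nodes, so the store must be pruned aggressively as it fills.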
Finally, to test and validate the proposed model, different domains have been used, such as the N-Queens Problem, the Resource-Constrained Project Scheduling Problem, the Path-Finding Problem in video games, and Behavioural Pattern Identification in users. In some of these domains, the performance of the proposed model has been compared against a classical Genetic Algorithm to provide a comparative study and perform an analytical comparison between both approaches.
Environmental influence in bio-inspired game level solver algorithms
Proceedings of the 7th International Symposium on Intelligent Distributed Computing - IDC 2013, Prague, Czech Republic, September 2013. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-01571-2-19. Bio-inspired algorithms have been widely used to solve problems in areas like heuristic search, classical optimization, and optimum configuration of complex systems. This paper studies how Genetic Algorithms (GA) and Ant Colony Optimization (ACO) algorithms can be applied to automatically solve levels of the well-known Lemmings Game. The main goal of this work is to study the influence that the environment exerts over these algorithms, especially when the goal of the selected game is to save an individual (a lemming) that should take its environment into account to improve its chances of survival. The experimental evaluations carried out reveal that the performance of the algorithms (i.e. the number of paths found) improves when the algorithm uses a small quantity of information about the environment. This work has been supported by the Spanish Ministry of Science and Innovation under grant TIN2010-19872.
Behaviour-based identification of student communities in virtual worlds
Virtual Worlds (VW) have gained popularity in recent years in domains like training and education, mainly due to their highly immersive and interactive 3D characteristics. In these platforms, the user (represented by an avatar) can move and interact in an artificial world with a high degree of freedom: users can talk, chat, build and design objects, program and compile their own programs, or move (flying, teleporting, walking or running) to different parts of the world. Although these environments provide an interesting working place for students and educators, VW platforms (such as OpenCobalt or OpenSim, amongst others) rarely provide mechanisms to facilitate the automatic (or semi-automatic) analysis of users' interactions. Using a VW platform called VirtUAM, the information extracted from different experiments is used to analyse and define student communities based on their behaviour. To define individual student behaviour, different characteristics are extracted from the system, such as the avatar position (in the form of GPS coordinates) and the set of actions (interactions) performed by students within the VW. Later, this information is used to automatically detect behavioural patterns. This paper shows how this information can be used to group students into different communities based on their behaviour. Experimental results show how community identification can be successfully performed using the K-Means algorithm and the Normalized Compression Distance. The resulting communities contain users working in nearby places or with similar behaviours inside the virtual world. This work has been funded by the Spanish Ministry of Science and Innovation under the project ABANT (TIN2010-19872/TSI).
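The pairwise distance at the core of that pipeline has a compact definition. Below is a common zlib-based sketch of the Normalized Compression Distance, which is one standard way to realize it (the paper does not state which compressor was used); since K-Means proper assumes a vector space, the NCD matrix is typically fed to a medoid-style variant or used to build feature vectors, and the clustering step itself is not reproduced here.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: a parameter-free similarity measure
    over arbitrary byte sequences (e.g. logged avatar action traces).
    Values near 0 mean 'similar', values near 1 mean 'unrelated'."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))  # concatenation compresses well iff x, y share structure
    return (cxy - min(cx, cy)) / max(cx, cy)
```

In a behaviour-analysis setting, each student's interaction log would be serialized to bytes and all pairwise NCDs computed before clustering.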
Estudio de la influencia de topologías de comunicación en sistemas multiagente bioinspirados (Study of the influence of communication topologies on bio-inspired multi-agent systems)
Máster en Ingeniería en Informática y de Telecomunicaciones (Master's in Computer Science and Telecommunications Engineering)
Estudio de un sistema móvil de televisión con soporte de pago por visión a través de redes de próxima generación (Study of a mobile pay-per-view television system over next-generation networks)
Next Generation Networking, or NGN, is the technology that will replace the current third-generation (3G) system. These networks are based on IP technology and provide data, voice and multimedia services. The aim is to merge IP technology with mobile devices. From the operators' point of view, NGN allows services developed by third parties to be integrated into the network; moreover, the development process of a new service is faster and cheaper. From the users' point of view, it offers a wide range of services that can be used from any device with Internet access: a mobile phone, an office computer or a personal computer. The aim of this document is to describe the study of a mobile television system based on Next Generation Networking. The document presents the state of the art of the technologies related to this project and its development process. Finally, the results of the study and development of the project are explained, and future lines of development are proposed. Ingeniería en Informática.
A multi-agent traffic simulation framework for evaluating the impact of traffic lights
This is an electronic version of the paper presented at the 3rd International Conference on Agents and Artificial Intelligence, held in Rome in 2011. The growing number of vehicles causes serious strain on road infrastructures. Traffic jams inevitably occur, wasting time and money for both cities and their drivers. To mitigate this problem, traffic simulation tools based on multi-agent techniques can be used to quickly prototype potentially problematic scenarios to better understand their inherent causes. This work centers on the effects of traffic light configuration on the flow of vehicles in a road network. To do so, a Multi-Agent Traffic Simulation Framework based on Particle Swarm Optimization techniques has been designed and implemented. Experimental results from this framework show an improvement in the average speed of traffic controlled by adaptive traffic lights over static ones. This work has been supported by the Spanish Ministry of Science and Innovation under grant TIN2010-19872.
A multi-agent simulation platform applied to the study of urban traffic lights
Proceedings of the 6th International Conference on Software and Data Technologies, ICSOFT 2011. The Multi-Agent System paradigm allows the development of complex software platforms to be used in a wide range of real-world scenarios. One of the most successful areas in which these technologies have been applied is the simulation and optimization of complex systems, and traffic simulation/optimization problems are an especially suitable target for such a platform. This paper proposes a new Multi-Agent simulation platform where agents are based on a Swarm model (lightweight agents with very low autonomy or proactivity). Using this framework, simulation designers are free to configure road networks of arbitrary complexity by customizing road width, geometry and intersections with other roads. To simulate different traffic flow scenarios, vehicle trajectories can be defined by choosing start and end locations and providing a traffic generation function for each defined trajectory; how many vehicles are generated at each time step is determined by a time-series function. The domain of traffic simulation has been selected to investigate the effect of traffic light configuration on the flow of vehicles in a road network. The experimental results from this platform show a strong correlation between traffic light behavior and the flow of traffic through the network, which affects the congestion of the road. This work has been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2010-19872 and by Jobssy.com.
Comparación de la sensibilidad a herbicidas de cinco variedades de trigo candeal (Triticum durum) (Comparison of the herbicide sensitivity of five durum wheat varieties)
Durum wheat is an interesting alternative to bread wheat, although weed management remains an obstacle to its adoption. Durum wheat responds differently to post-emergence herbicides compared with bread wheat; the auxinic, ALS-inhibitor and ACCase-inhibitor herbicides frequently used in post-emergence show greater phytotoxicity on durum wheat. The objective of this work was to compare the sensitivity of five durum wheat varieties to different selective herbicides in germination and plumule growth tests under controlled conditions. The trial was carried out with the varieties Facón, Quillén and Cariló and the experimental lines 731 and 735 (Chacra Experimental Barrow). The herbicides studied were pinoxaden, iodosulfuron-mesosulfuron + metsulfuron, pyroxsulam, flucarbazone, dicamba and picloram. Sixty seeds of each cultivar were sown in trays on filter paper moistened with 40 ml of solution at 0, 0.01, 0.1, 1 or 10 µM. Germination percentage and plumule growth were evaluated periodically. The data were analysed by ANOVA, with differences between means tested by Tukey's test (p ≤ 0.05). The results indicate that, under the different herbicide doses, all varieties showed greater sensitivity in plumule growth than in germination percentage. Facón showed the lowest germination and plumule growth values for all the herbicides and doses evaluated, while 735 showed the highest. The remaining varieties behaved intermediately, with variations between treatments and herbicides. The auxinic herbicides showed a similar pattern affecting plumule growth, stimulating it at medium concentrations and inhibiting it at the maximum dose. The results show a differential response of the varieties to the different selective herbicides. These intra-specific variations would provide evidence on which to base management strategies aimed at minimizing the risk of phytotoxicity to the crop. Five durum wheat materials were compared. Three of them, BI Facón, BI Quillén and BI Cariló, are varieties widely cultivated in the SE region of Buenos Aires province; the two experimental lines 731 and 735, products of the breeding programme of the Chacra Experimental Integrada Barrow, will be released to the market in the 2019/2020 season under the names BI Galpón (731) and BI Charito (735). The herbicides used were pinoxaden, iodosulfuron-mesosulfuron + metsulfuron, pyroxsulam, flucarbazone, dicamba and picloram. Facultad de Ciencias Agrarias y Forestales.
Distributed parameter tuning for genetic algorithms
Genetic Algorithms (GAs) are a family of search algorithms based on the mechanics of natural selection and biological evolution. They are able to efficiently exploit historical information in the evolution process to look for optimal solutions for a given problem, or to approximate them, achieving excellent performance in optimization problems that involve a large set of dependent variables. Despite the excellent results of GAs, their use may create new problems. One of them is how to find a good setting for the usually large number of parameters that must be tuned to obtain good performance.
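The parameter set in question can be made concrete with a minimal bit-string GA: the keyword arguments below (population size, number of generations, crossover and mutation rates, tournament size) are exactly the kind of knobs whose tuning is at stake. This is a generic textbook sketch, not the distributed platform described in the paper.

```python
import random

def genetic_algorithm(fitness, genome_len, pop_size=50, generations=50,
                      crossover_rate=0.9, mutation_rate=0.01, tournament=3):
    """Minimal bit-string GA with tournament selection, one-point crossover
    and per-bit mutation; returns the best individual of the final population."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: best of a small random sample.
            return max(random.sample(pop, tournament), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = select(), select()
            if random.random() < crossover_rate:  # one-point crossover
                cut = random.randrange(1, genome_len)
                a = a[:cut] + b[cut:]
            # Per-bit mutation flips each gene with small probability.
            nxt.append([1 - g if random.random() < mutation_rate else g for g in a])
        pop = nxt
    return max(pop, key=fitness)

# OneMax example: maximize the number of 1-bits in a 20-bit genome.
best = genetic_algorithm(sum, genome_len=20)
```

Even on this toy problem, the result is sensitive to the rates and sizes above, which is the tuning problem the paper distributes across agents.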
This paper describes a new platform that is able to extract the Regular Expression that matches a set of examples, using a supervised learning and agent-based framework. To do that, GA-based agents decompose the GA execution into a distributed sequence of operations performed by them. The platform has been applied to the language induction problem; for that reason, the experiments focus on the extraction of the regular expression that matches a set of examples. Finally, the paper shows the efficiency of the proposed platform (in terms of fitness value) applied to three case studies: emails, phone numbers and URLs. Moreover, it describes how the codification of the alphabet affects the performance of the platform. This work has been partially supported by the Spanish Ministry of Science and Innovation under the projects COMPUBIODIVE (TIN2007-65989), V-LeaF (TIN2008-02729-E/TIN), Castilla-La Mancha project PEII09-0266-6640 and HADA (TIN2007-64718).
Variation of the relative biological effectiveness with fractionation in proton therapy: analysis of prostate cancer response
Purpose: To present a methodology to analyze the variation of the RBE with fractionation from clinical data of tumor control probability (TCP), and to apply it to study the response of prostate cancer to proton therapy.
Methods and Materials: We analyzed the dependence of the RBE on the dose per fraction by using the LQ model and the Poisson TCP formalism. Clinical TCPs for prostate cancer treated with photon and proton therapy for conventional fractionation (2 Gy(RBE) x 37 fractions), moderate hypofractionation (3 Gy(RBE) x 20 fractions) and hypofractionation (7.25 Gy(RBE) x 5 fractions) were obtained from the literature and analyzed.
Results: The theoretical analysis showed three distinct regions in which the RBE monotonically decreases, increases or stays constant with the dose per fraction, depending on the change of (α, β) values between photon and proton irradiation, the equilibrium point being at (α_p/β_p) = (α_X/β_X)(α_X/α_p). An analysis of the clinical data showed RBE values that decline with increasing dose per fraction: for low risk, RBE = 1.124, 1.119 and 1.102 for 1.82 Gy, 2.73 Gy and 6.59 Gy per fraction (physical proton doses), respectively; for intermediate risk, RBE = 1.119 and 1.102 for 1.82 Gy and 6.59 Gy per fraction (physical proton doses), respectively. These values are nonetheless very close to the nominal 1.1 value.
Conclusions: We presented a methodology to analyze the RBE for different fractionations, and we used it to study clinical data for prostate cancer. The analysis shows a monotonically decreasing RBE with increasing dose per fraction, which is expected from the LQ formalism and the changes in (α, β) between photon and proton irradiation. However, the calculations in this study have to be considered with care, as they may be biased by limitations in the modeling and/or by the clinical data set used for the analysis. Comment: Minor changes to match the accepted manuscript; in press, Medical Physics.
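The LQ/Poisson machinery referenced above takes a standard form, sketched here under the usual assumptions (n fractions of dose d per fraction, N_0 initial clonogens); the specific parameter values fitted in the paper are not reproduced.

```latex
% Poisson TCP with linear-quadratic (LQ) cell kill, n fractions of dose d:
\[ \mathrm{TCP} = \exp\!\left[ -N_0 \exp\!\bigl( -n(\alpha d + \beta d^2) \bigr) \right] \]
% RBE at isoeffect: the photon dose per fraction d_X and the physical proton
% dose per fraction d_p produce the same log cell kill,
\[ \alpha_X d_X + \beta_X d_X^2 = \alpha_p d_p + \beta_p d_p^2,
   \qquad \mathrm{RBE} = \frac{d_X}{d_p}, \]
% and the RBE becomes independent of dose per fraction at the equilibrium point
\[ \frac{\alpha_p}{\beta_p} = \frac{\alpha_X}{\beta_X}\,\frac{\alpha_X}{\alpha_p}. \]
```

Setting d_X = RBE · d_p for all d_p in the isoeffect relation recovers the equilibrium condition quoted in the Results, which separates the decreasing, constant and increasing RBE regimes.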